Beeldenstorm in de Geneeskunde (Iconoclasm in Medicine)
The Dikke van Dale assigns two common meanings to the word "beeldenstorm" (iconoclasm). The Beeldenstorm is best known as a pivotal moment in Dutch history, referring to the "Calvinist popular movement of 1566, mainly in Flanders and Brabant, against the veneration of images in churches". This iconoclasm was accompanied by "violent destruction and devastation of statues, paintings and other valuables and works of art in churches". Although political and social causes certainly underlay the Beeldenstorm as well, its motives were primarily ideological and religious in nature. The popular movement turned against the veneration of images and against the ostentatious wealth of the Roman Catholic Church. For the Low Countries, the consequences of the Beeldenstorm were far-reaching. The events led Philip II to send the Duke of Alba to the Netherlands to carry out a punitive expedition, impose the Catholic faith, and centralize government. Resistance to this regime led to the Eighty Years' War. Indirectly, then, the Beeldenstorm can be seen as the trigger of the Eighty Years' War.
This turning against the established order returns in the second meaning that the Dikke van Dale assigns to the word "beeldenstorm": "the combating of established institutions, conventions and received opinions". With this lecture I gladly embrace this second, figurative meaning of the word. For it is above all images that have profoundly changed medicine over the past centuries. And it is above all images that have led to important new insights, both in human anatomy and physiology and in the understanding of disease processes. This is hardly surprising, given the power of the image; not for nothing do we have the expression "seeing is believing". We learn by seeing. There is no more powerful means of combating established opinions than showing images that contradict them. And there is no more powerful way to gain new insight into a system than to image the processes taking place within it. So too in medicine.
Inaugural lecture. Delivered in abridged form on the occasion of the acceptance of the office of professor, with the remit Medical Image Processing in Radiology and Medical Informatics, at Erasmus MC, faculty of Erasmus Universiteit Rotterdam, on 13 January 200
PPF - A Parallel Particle Filtering Library
We present the parallel particle filtering (PPF) software library, which
enables hybrid shared-memory/distributed-memory parallelization of particle
filtering (PF) algorithms combining the Message Passing Interface (MPI) with
multithreading for multi-level parallelism. The library is implemented in Java
and relies on OpenMPI's Java bindings for inter-process communication. It
includes dynamic load balancing, multi-thread balancing, and several
algorithmic improvements for PF, such as input-space domain decomposition. The
PPF library hides the difficulties of efficient parallel programming of PF
algorithms and provides application developers with the necessary tools for
parallel implementation of PF methods. We demonstrate the capabilities of the
PPF library using two distributed PF algorithms in two scenarios with different
numbers of particles. The PPF library runs a 38 million particle problem,
corresponding to more than 1.86 GB of particle data, on 192 cores with 67%
parallel efficiency. To the best of our knowledge, the PPF library is the first
open-source software that offers a parallel framework for PF applications.
Comment: 8 pages, 8 figures; will appear in the proceedings of the IET Data
Fusion & Target Tracking Conference 201
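The propagate–weight–resample loop that PPF parallelizes can be sketched sequentially. The following is a minimal pure-Python bootstrap particle filter for a 1-D random walk observed in Gaussian noise; it illustrates only the algorithmic core, not the library's Java/MPI API, and all function and parameter names are hypothetical.

```python
import random
import math

def bootstrap_pf(observations, n_particles=500, process_noise=1.0,
                 obs_noise=1.0, seed=0):
    """Minimal sequential bootstrap particle filter for a 1-D random-walk
    state observed in Gaussian noise. A parallel library distributes
    exactly these steps (propagation, weighting, resampling) across
    processes and threads; this sketch keeps them in one loop."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the motion model.
        particles = [p + rng.gauss(0.0, process_noise) for p in particles]
        # Weight: Gaussian likelihood of the observation given each particle.
        weights = [math.exp(-0.5 * ((y - p) / obs_noise) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Estimate: weighted posterior mean.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample: multinomial resampling to combat weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

In a distributed setting the main design question, which PPF addresses with dynamic load balancing, is that the resampling step requires global knowledge of the weights and therefore inter-process communication.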
Vessel enhancing diffusion: a scale space representation of vessel structures
A method is proposed to enhance vascular structures within the framework
of scale space theory. We combine a smooth vessel filter which is based on
a geometrical analysis of the Hessian's eigensystem, with a non-linear
anisotropic diffusion scheme. The amount and orientation of diffusion
depend on the local vessel likeliness. Vessel enhancing diffusion (VED) is
applied to patient and phantom data and compared to linear, regularized
Perona-Malik, edge and coherence enhancing diffusion. The method performs
better than most of the existing techniques in visualizing vessels with
varying radii and in enhancing vessel appearance. A diameter study on
phantom data shows that VED least affects the accuracy of diameter
measurements. It is shown that using VED as a preprocessing step improves
level set based segmentation of the cerebral vasculature, in particular
segmentation of the smaller vessels of the vasculature
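The Hessian-based vessel filter at the heart of VED can be made concrete in 2-D. The sketch below computes a Frangi-style vesselness measure from finite-difference second derivatives; it is a simplified stand-in (the method itself operates in 3-D and couples the vesselness to an anisotropic diffusion scheme), and the parameters `beta` and `c` are illustrative defaults, not the paper's settings.

```python
import math

def hessian_vesselness_2d(img, y, x, beta=0.5, c=0.5):
    """Frangi-style vesselness at pixel (y, x) of a 2-D image (list of
    lists), from the eigenvalues of the 2x2 Hessian estimated by central
    finite differences. Bright line-like structures score near 1."""
    # Second derivatives by central differences.
    dyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    dxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    dxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    # Eigenvalues of the symmetric Hessian [[dxx, dxy], [dxy, dyy]].
    mean = (dxx + dyy) / 2.0
    root = math.sqrt(((dxx - dyy) / 2.0) ** 2 + dxy ** 2)
    l1, l2 = sorted([mean - root, mean + root], key=abs)  # |l1| <= |l2|
    if l2 >= 0:  # a bright tube requires a strongly negative l2
        return 0.0
    rb = abs(l1) / abs(l2)            # distinguishes lines from blobs
    s = math.sqrt(l1 ** 2 + l2 ** 2)  # second-order structure strength
    return math.exp(-rb ** 2 / (2 * beta ** 2)) * \
           (1 - math.exp(-s ** 2 / (2 * c ** 2)))
```

On a synthetic image containing a bright horizontal line, the measure is high on the line and zero in the flat background, which is exactly the local "vessel likeliness" that steers the amount and orientation of diffusion in VED.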
Vessel Axis Tracking Using Topology Constrained Surface Evolution
An approach to three-dimensional vessel axis tracking based on surface evolution is presented. The main idea is to guide the evolution of the surface by analyzing its skeleton topology during evolution, and imposing shape constraints on the topology. For example, the intermediate topology can be processed such that it represents a single vessel segment, a bifurcation, or a more complex vascular topology. The evolving surface is then re-initialized with the newly found topology. Re-initialization is a crucial step since it creates probing behavior of the evolving front, encourages the segmentation process to extract the vascular structure of interest, and reduces the risk of the curve leaking into the background. The method was evaluated in two computed tomography angiography applications: (i) extracting the internal carotid arteries including the region in which they traverse through the skull base, which is challenging due to the proximity of bone structures and overlap in intensity values, and (ii) extracting the carotid bifurcations including many cases in which they are severely stenosed and contain calcifications. The vessel axis was found in 90% (18/20 internal carotids in ten patients) and 70% (14/20 carotid bifurcations in a different set of ten patients) of the cases
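The topology constraint can be illustrated with a toy skeleton classifier. Assuming the skeleton has already been reduced to an undirected graph over centerline nodes (an assumption for illustration; the method itself extracts the skeleton from the evolving surface), one can check whether it represents a single vessel segment, a bifurcation, or something more complex:

```python
def classify_skeleton_topology(edges):
    """Classify a vessel skeleton given as an undirected edge list over
    node ids. Returns 'segment', 'bifurcation', or 'complex'. A toy
    stand-in for the topology analysis used to constrain the evolving
    surface before re-initialization."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    endpoints = sum(1 for d in degree.values() if d == 1)
    junctions = sum(1 for d in degree.values() if d >= 3)
    if junctions == 0 and endpoints == 2:
        return "segment"       # a single vessel segment
    if junctions == 1 and endpoints == 3:
        return "bifurcation"   # one branch point, three open ends
    return "complex"           # anything else: constrain or reject
```

A segmentation expecting, say, a carotid bifurcation would accept only intermediate skeletons classified as "segment" or "bifurcation" and re-initialize the front accordingly, which is what suppresses leaking into adjacent structures.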
Disease Progression Timeline Estimation for Alzheimer's Disease using Discriminative Event Based Modeling
Alzheimer's Disease (AD) is characterized by a cascade of biomarkers becoming
abnormal, the pathophysiology of which is very complex and largely unknown.
Event-based modeling (EBM) is a data-driven technique to estimate the sequence
in which biomarkers for a disease become abnormal based on cross-sectional
data. It can help in understanding the dynamics of disease progression and
facilitate early diagnosis and prognosis. In this work we propose a novel
discriminative approach to EBM, which is shown to be more accurate than
existing state-of-the-art EBM methods. The method first estimates for each
subject an approximate ordering of events. Subsequently, the central ordering
over all subjects is estimated by fitting a generalized Mallows model to these
approximate subject-specific orderings. We also introduce the concept of
relative distance between events which helps in creating a disease progression
timeline. Subsequently, we propose a method to stage subjects by placing them
on the estimated disease progression timeline. We evaluated the proposed method
on Alzheimer's Disease Neuroimaging Initiative (ADNI) data and compared the
results with existing state-of-the-art EBM methods. We also performed extensive
experiments on synthetic data simulating the progression of Alzheimer's
disease. The event orderings obtained on ADNI data seem plausible and are in
agreement with the current understanding of the progression of AD. The proposed
patient staging algorithm consistently outperformed those of
state-of-the-art EBM methods. Event orderings obtained in simulation
experiments were more accurate than those of other EBM methods and the
estimated disease progression timeline was observed to correlate with the
timeline of actual disease progression. The results of these experiments are
encouraging and suggest that discriminative EBM is a promising approach to
disease progression modeling
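The central-ordering step can be sketched with a simple rank-aggregation stand-in. The actual method fits a generalized Mallows model to the approximate subject-specific orderings; the Borda-style consensus below only conveys the idea of estimating a central event ordering from noisy per-subject orderings, and the biomarker event names are illustrative.

```python
def central_event_ordering(subject_orderings):
    """Estimate a central ordering of disease events from per-subject
    orderings by mean rank (Borda count). A simplified stand-in for
    fitting a generalized Mallows model, which additionally yields
    per-event spread parameters used for the progression timeline."""
    events = subject_orderings[0]
    rank_sum = {e: 0 for e in events}
    for ordering in subject_orderings:
        for rank, event in enumerate(ordering):
            rank_sum[event] += rank
    # Sort events by total (equivalently mean) rank; ties broken by name.
    return sorted(events, key=lambda e: (rank_sum[e], e))
```

Staging a new subject then amounts to placing that subject at the position along the central ordering (or, in the full method, the estimated timeline) that best explains which of their biomarkers are abnormal.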
IT Infrastructure to Support the Secondary Use of Routinely Acquired Clinical Imaging Data for Research
We propose an infrastructure for the automated anonymization, extraction and processing of image data stored in clinical data repositories, to make routinely acquired imaging data available for research purposes. The automated system, which was tested in the context of analyzing routinely acquired MR brain imaging data, consists of four modules: subject selection using PACS query, anonymization of privacy-sensitive information and removal of facial features, quality assurance on DICOM header and image information, and quantitative imaging biomarker extraction. In total, 1,616 examinations were selected based on the following MRI scanning protocols: dementia protocol (246), multiple sclerosis protocol (446) and open question protocol (924). We evaluated the effectiveness of the infrastructure in accessing and successfully extracting biomarkers from routinely acquired clinical imaging data. To examine the validity, we compared brain volumes between patient groups with positive and negative diagnosis, according to the patient reports. Overall, success rates of image data retrieval and automatic processing were 82.5%, 82.3% and 66.2% for the three protocol groups respectively, indicating that a large percentage of routinely acquired clinical imaging data can be used for brain volumetry research, despite image heterogeneity. In line with the literature, brain volumes were found to be significantly smaller (p-value <0.001) in patients with a positive diagnosis of dementia (915 ml) compared to patients with a negative diagnosis (939 ml). This study demonstrates that quantitative image biomarkers such as intracranial and brain volume can be extracted from routinely acquired clinical imaging data. This enables secondary use of clinical images for research into quantitative biomarkers at a hitherto unprecedented scale
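The anonymization module can be sketched as a header-scrubbing step. Headers are modeled here as a plain dict; a real pipeline would operate on DICOM datasets with a DICOM library and additionally deface the image data. The tag list and function names below are illustrative assumptions, not the infrastructure's actual configuration.

```python
# Illustrative set of privacy-sensitive DICOM attribute names; a real
# deployment would follow a de-identification profile, not this short list.
PRIVACY_TAGS = {"PatientName", "PatientBirthDate", "PatientAddress",
                "InstitutionName", "ReferringPhysicianName"}

def anonymize_header(header, subject_id):
    """Return a copy of `header` with privacy-sensitive tags removed and
    the patient identity replaced by a pseudonymous research identifier,
    leaving acquisition-related tags intact for quality assurance."""
    clean = {k: v for k, v in header.items() if k not in PRIVACY_TAGS}
    clean["PatientID"] = subject_id  # pseudonymous study identifier
    return clean
```

Keeping acquisition tags (modality, study date, sequence parameters) untouched is what allows the downstream quality-assurance and biomarker-extraction modules to run on the anonymized data.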
Integrated Analysis and Visualization of Group Differences in Structural and Functional Brain Connectivity: Applications in Typical Ageing and Schizophrenia
Structural and functional brain connectivity are increasingly used to identify and analyze group differences in studies of brain disease. This study presents methods to analyze uni- and bi-modal brain connectivity and evaluate their ability to identify differences. Novel visualizations of significantly different connections comparing multiple metrics are presented. On the global level, "bi-modal comparison plots" show the distribution of uni- and bi-modal group differences and the relationship between structure and function. Differences between brain lobes are visualized using "worm plots". Group differences in connections are examined with an existing visualization, the "connectogram". These visualizations were evaluated in two proof-of-concept studies: (1) middle-aged versus elderly subjects; and (2) patients with schizophrenia versus controls. Each included two measures derived from diffusion-weighted images and two from functional magnetic resonance images. The structural measures were the minimum cost path between two anatomical regions according to the "Statistical Analysis of Minimum cost path based Structural Connectivity" method and the average fractional anisotropy along the fiber. The functional measures were Pearson's correlation and partial correlation of mean regional time series. The relationship between structure and function was similar in both studies. Uni-modal group differences varied greatly between connectivity types. Group differences were identified in both studies globally, within brain lobes and between regions. In the aging study, minimum cost path was highly effective in identifying group differences on all levels; fractional anisotropy and mean correlation showed smaller differences on the brain lobe and regional levels. In the schizophrenia study, minimum cost path and fractional anisotropy showed differences on the global level and within brain lobes; mean correlation showed small differences on the lobe level.
Only fractional anisotropy and mean correlation showed regional differences. The presented visualizations were helpful in comparing and evaluating connectivity measures on multiple levels in both studies
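The simpler of the two functional measures, Pearson's correlation of mean regional time series, is easy to make concrete. The pure-Python sketch below builds a full-correlation connectivity matrix from per-region time series; partial correlation, the study's second functional measure, would additionally regress out the remaining regions before correlating.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length, non-constant series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def functional_connectivity(regional_series):
    """Full-correlation connectivity matrix from mean regional time
    series (one list per brain region): entry (i, j) is the Pearson
    correlation between regions i and j."""
    n = len(regional_series)
    return [[pearson(regional_series[i], regional_series[j])
             for j in range(n)] for i in range(n)]
```

Group comparison then reduces to testing, per connection (i, j), whether this matrix entry differs between the two groups, which is what the bi-modal comparison plots, worm plots, and connectograms visualize at the global, lobe, and regional levels respectively.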
Multiple sparse representations classification
Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class-specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods. 
In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and sparsity level
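The classification rule can be sketched at sparsity level 1. In the toy below, each class dictionary is a list of atom vectors, a patch is "sparse coded" by its best single scaled atom, and mSRC pools the residuals of the `n_estimates` best atoms per class, with `n_estimates=1` reducing to the conventional SRC decision. This is a simplified stand-in for the multi-atom sparse codes over overcomplete dictionaries used in the paper; dictionary contents and names are illustrative.

```python
import math

def _residual(patch, atom):
    """Residual norm after least-squares projection of `patch` onto a
    single atom, i.e. a sparsity-1 sparse code."""
    num = sum(p * a for p, a in zip(patch, atom))
    den = sum(a * a for a in atom)
    coef = num / den if den else 0.0
    return math.sqrt(sum((p - coef * a) ** 2 for p, a in zip(patch, atom)))

def msrc_classify(patch, dictionaries, n_estimates=2):
    """Toy mSRC: for each class dictionary (a dict of label -> list of
    atom vectors), average the residuals of the `n_estimates` best
    single-atom representations and assign the class with the lowest
    mean residual."""
    best_class, best_score = None, float("inf")
    for label, atoms in dictionaries.items():
        residuals = sorted(_residual(patch, atom) for atom in atoms)
        score = sum(residuals[:n_estimates]) / n_estimates
        if score < best_score:
            best_class, best_score = label, score
    return best_class
```

The point of the generalization is visible even in this toy: pooling several independent residuals per class gives a more stable decision statistic than the single minimum residual used by conventional SRC.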